Self-Driving Car Engineer Nanodegree

Deep Learning

Project: Build a Traffic Sign Recognition Classifier

In this notebook, a template is provided for you to implement your functionality in stages which is required to successfully complete this project. If additional code is required that cannot be included in the notebook, be sure that the Python code is successfully imported and included in your submission, if necessary. Sections that begin with 'Implementation' in the header indicate where you should begin your implementation for your project. Note that some sections of implementation are optional, and will be marked with 'Optional' in the header.

In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide.

Note: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.


Step 0: Load The Data

In [1]:
# Load pickled data
import pickle

# TODO: Fill this in based on where you saved the training and testing data

training_file = 'train.p'
testing_file = 'test.p'

with open(training_file, mode='rb') as f:
    train = pickle.load(f)
with open(testing_file, mode='rb') as f:
    test = pickle.load(f)
    
X_train, y_train = train['features'], train['labels']
X_test, y_test = test['features'], test['labels']

Step 1: Dataset Summary & Exploration

The pickled data is a dictionary with 4 key/value pairs:

  • 'features' is a 4D array containing raw pixel data of the traffic sign images, (num examples, width, height, channels).
  • 'labels' is a 2D array containing the label/class id of the traffic sign. The file signnames.csv contains id -> name mappings for each id.
  • 'sizes' is a list containing tuples, (width, height) representing the the original width and height the image.
  • 'coords' is a list containing tuples, (x1, y1, x2, y2) representing coordinates of a bounding box around the sign in the image. THESE COORDINATES ASSUME THE ORIGINAL IMAGE. THE PICKLED DATA CONTAINS RESIZED VERSIONS (32 by 32) OF THESE IMAGES

Complete the basic data summary below.

In [2]:
### Replace each question mark with the appropriate value.

# TODO: Number of training examples
n_train = len(train['features'])

# TODO: Number of testing examples.
n_test = len(test['features'])

# TODO: What's the shape of an traffic sign image?
image_shape = train['features'].shape[1:]

# TODO: How many unique classes/labels there are in the dataset.
n_classes = len(set(train['labels']))

print("Number of training examples =", n_train)
print("Number of testing examples =", n_test)
print("Image data shape =", image_shape)
print("Number of classes =", n_classes)
Number of training examples = 39209
Number of testing examples = 12630
Image data shape = (32, 32, 3)
Number of classes = 43

Visualize the German Traffic Signs Dataset using the pickled file(s). This is open ended, suggestions include: plotting traffic sign images, plotting the count of each sign, etc.

The Matplotlib examples and gallery pages are a great resource for doing visualizations in Python.

NOTE: It's recommended you start with something simple first. If you wish to do more, come back to it after you've completed the rest of the sections.

In [3]:
### Data exploration visualization goes here.
### Feel free to use as many code cells as needed.
import matplotlib.pyplot as plt
# Visualizations will be shown in the notebook.
%matplotlib inline
In [4]:
from random import random
import numpy as np
import tensorflow as tf
In [5]:
import os
In [6]:
for (label, freq) in zip(range(n_classes), np.bincount(train['labels'])):
    print (label, freq)
0 210
1 2220
2 2250
3 1410
4 1980
5 1860
6 420
7 1440
8 1410
9 1470
10 2010
11 1320
12 2100
13 2160
14 780
15 630
16 420
17 1110
18 1200
19 210
20 360
21 330
22 390
23 510
24 270
25 1500
26 600
27 240
28 540
29 270
30 450
31 780
32 240
33 689
34 420
35 1200
36 390
37 210
38 2070
39 300
40 360
41 240
42 240
In [7]:
plt.bar(range(n_classes), np.bincount(train['labels']))
Out[7]:
<Container object of 43 artists>
In [8]:
# create a dictionary of labels
from collections import defaultdict
label_dict = {}
label_dict = defaultdict(lambda: [], label_dict)
N = len(train['features'])
for i in range(N):
    label_dict[train['labels'][i]].append(i)
In [9]:
# select a few plots randomly from each class and plot
n_cols = 8
plt.figure(figsize=[20, 100])
for i in range(n_classes):
    for j in range(n_cols):
        plt.subplot(n_classes, n_cols, i*n_cols+j+1)
        n = int(random() * len(label_dict[i]))
        image = train['features'][label_dict[i][n]]
        plt.imshow(image)
        plt.axis('off')
In [10]:
from scipy import ndimage, signal
In [11]:
def rgb2gray(rgb):
    return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
In [12]:
# effect of image transformations
image = train['features'][label_dict[5][100]]
image = rgb2gray(image)
plt.figure(figsize=[20, 6])
plt.subplot(141)
plt.imshow(image, cmap='gray')
plt.subplot(142)
plt.imshow(ndimage.rotate(image, 20, reshape=False, mode='nearest'), cmap='gray')
plt.subplot(143)
wiener_image = signal.wiener(image, (2, 2))
median_image = ndimage.median_filter(image, size=2)
plt.imshow(median_image, cmap='gray')
plt.subplot(144)
plt.imshow(ndimage.shift(image, (5, 5), mode='nearest'), cmap='gray')
Out[12]:
<matplotlib.image.AxesImage at 0x1409ef7f0>

Step 2: Design and Test a Model Architecture

Design and implement a deep learning model that learns to recognize traffic signs. Train and test your model on the German Traffic Sign Dataset.

There are various aspects to consider when thinking about this problem:

  • Neural network architecture
  • Play around preprocessing techniques (normalization, rgb to grayscale, etc)
  • Number of examples per label (some have more than others).
  • Generate fake data.

Here is an example of a published baseline model on this problem. It's not required to be familiar with the approach used in the paper but, it's good practice to try to read papers like these.

NOTE: The LeNet-5 implementation shown in the classroom at the end of the CNN lesson is a solid starting point. You'll have to change the number of classes and possibly the preprocessing, but aside from that it's plug and play!

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

In [13]:
### Preprocess the data here.
### Feel free to use as many code cells as needed.
In [14]:
from time import time
In [15]:
def rgb2gray(rgb):
    return np.dot(rgb[...,:3], [0.299, 0.587, 0.114])
In [16]:
# http://stackoverflow.com/questions/29831489/numpy-1-hot-array

def convertToOneHot(vector, num_classes=None):
    """
    Converts an input 1-D vector of integers into an output
    2-D array of one-hot vectors, where an i'th input value
    of j will set a '1' in the i'th row, j'th column of the
    output array.

    Example:
        v = np.array((1, 0, 4))
        one_hot_v = convertToOneHot(v)
        print one_hot_v

        [[0 1 0 0 0]
         [1 0 0 0 0]
         [0 0 0 0 1]]
    """

    assert isinstance(vector, np.ndarray)
    assert len(vector) > 0

    if num_classes is None:
        num_classes = np.max(vector)+1
    else:
        assert num_classes > 0
        assert num_classes >= np.max(vector)

    result = np.zeros(shape=(len(vector), num_classes))
    result[np.arange(len(vector)), vector] = 1
    return result.astype(int)
In [17]:
# convert images to grayscale

features_train = rgb2gray(train['features'])
print(train['features'].shape, features_train.shape)
labels_train = train['labels']
print (labels_train.shape)

features_test = rgb2gray(test['features'])
print(test['features'].shape, features_test.shape)
labels_test = test['labels']
print (labels_test.shape)
(39209, 32, 32, 3) (39209, 32, 32)
(39209,)
(12630, 32, 32, 3) (12630, 32, 32)
(12630,)

Question 1

Describe how you preprocessed the data. Why did you choose that technique?

Answer:

I converted the images to grayscale because the color of the image did not seem to be as important as the shape of the signs and it was mentioned in the literature that there was no performance penalty on using grayscale.

I used min-max scaling to cast the pixels in the range (0.1, 0.9) to create a uniform range of pixel values for all the images to improve the network performance.

In [18]:
### Generate data additional data (OPTIONAL!)
### and split the data into training/validation/testing sets here.
### Feel free to use as many code cells as needed.
In [19]:
def min_max_scale(image_data, low=0.1, high=0.9):
    return low + (image_data - np.min(image_data)) * (high - low) / (np.max(image_data) - np.min(image_data))
# generate more training data by transforming the existing images new_images_1 = np.array([ndimage.rotate(image, -10, reshape=False, mode='nearest') for image in features_train]) new_images_2 = np.array([ndimage.rotate(image, 20, reshape=False, mode='nearest') for image in features_train]) new_images_3 = np.array([ndimage.shift(image, (5, 5), mode='nearest') for image in features_train]) new_images_4 = np.array([ndimage.shift(image, (-5, -5), mode='nearest') for image in features_train]) features_train = np.vstack((features_train, new_images_1, new_images_2, new_images_3, new_images_4)) labels_train = np.array(list(labels_train)*5) print ('new training set = ', features_train.shape) print ('new label set = ', labels_train.shape)
In [20]:
# flatten the images
features_train = features_train.reshape(len(features_train), -1)
features_test = features_test.reshape(len(features_test), -1)
In [21]:
# scale the images
features_train = min_max_scale(features_train)
features_test = min_max_scale(features_test)
In [22]:
from sklearn.model_selection import train_test_split
In [23]:
# split train set into training and validation

X_train, X_valid, y_train, y_valid = train_test_split(features_train, labels_train, test_size=0.3, random_state=42)
print (X_train.shape, y_train.shape, X_valid.shape, y_valid.shape)

y_train = convertToOneHot(y_train)
y_valid = convertToOneHot(y_valid)
labels_test = convertToOneHot(labels_test)
(27446, 1024) (27446,) (11763, 1024) (11763,)

Question 2

Describe how you set up the training, validation and testing data for your model. Optional: If you generated additional data, how did you generate the data? Why did you generate the data? What are the differences in the new dataset (with generated data) from the original dataset?

Answer:

In [24]:
### Define your architecture here.
### Feel free to use as many code cells as needed.

I split the training data into in a 70-30 ratio and used the former for training and latter for validation. I used the test set to check for final accuracy only after I settled down on a model.

I generated fake images by translating and rotating the training images but that did not turn out to be useful for the test set accuracy and hence I reverted back to the original dataset only for training.

In [25]:
def conv2d(x, W, b, stride=1, padding='VALID'):
    # Conv2D wrapper, with bias and relu activation
    x = tf.nn.conv2d(x, W, padding=padding, strides=[1, stride, stride, 1])
    x = tf.nn.bias_add(x, b)
    return tf.nn.relu(x)
In [26]:
def maxpool2d(x, k=2, stride=2):
    # MaxPool2D wrapper
    return tf.nn.max_pool(x, ksize=[1, k, k, 1], strides=[1, stride, stride, 1], padding='SAME')
In [27]:
def inception(input_layer):
    # input 13x13x32
    conv_1x1 = conv2d(input_layer, weights['winc1'], biases['binc1'], padding='SAME') # 13x13x16
    conv_3x3 = conv2d(conv_1x1, weights['winc3'], biases['binc3'], padding='SAME') # 13x13x32
    conv_5x5 = conv2d(conv_1x1, weights['winc5'], biases['binc5'], padding='SAME') # 13x13x32
    max_pool = maxpool2d(input_layer, k=2, stride=1) # 13x13x32
    max_pool_conv_1x1 = conv2d(max_pool, weights['winpc1'], biases['binpc1'], padding='SAME') # 13x13x32
    conv_d1x1 = conv2d(input_layer, weights['windc1'], biases['bindc1'], padding='SAME') # 13x13x16
    output_layer = tf.concat(3, [conv_3x3, conv_5x5, conv_d1x1, max_pool_conv_1x1]) # 13x13x112
    return output_layer
In [28]:
def Network(x, dropout):
    
    x = tf.reshape(x, (-1, image_shape[0], image_shape[1], 1))
    
    # Conv Layer 1
    conv1 = conv2d(x, weights['wc1'], biases['bc1']) # 26x26x32
    conv1 = maxpool2d(conv1, k=2) # 13x13x32

    '''
    # Conv Layer 2
    conv2 = conv2d(conv1, weights['wc2'], biases['bc2'])
    conv2 = maxpool2d(conv2, k=2)
    #conv2 = tf.nn.dropout(conv2, dropout)
    '''

    # Inception Layer
    incp1 = inception(conv1) # 13x13x112

    # FC Layer 1
    fc1 = tf.contrib.layers.flatten(incp1)
    fc1 = tf.add(tf.matmul(fc1, weights['wd1']), biases['bd1'])
    fc1 = tf.nn.relu(fc1)
    fc1 = tf.nn.dropout(fc1, dropout)  # 1024

    # FC Layer 2
    fc2 = tf.add(tf.matmul(fc1, weights['wd2']), biases['bd2'])
    fc2 = tf.nn.relu(fc2)
    fc2 = tf.nn.dropout(fc2, dropout)  # 256

    # Out Layer
    out = tf.add(tf.matmul(fc2, weights['out']), biases['out']) # 43
    
    return out
In [29]:
# initialize weights and biases

f = 7
weights = {
    'wc1': tf.get_variable('wc1', shape=([f, f, 1, 32]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'winc1': tf.get_variable('winc1', shape=([1, 1, 32, 16]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'winc3': tf.get_variable('winc3', shape=([3, 3, 16, 32]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'winc5': tf.get_variable('winc5', shape=([5, 5, 16, 32]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'winpc1': tf.get_variable('winpc1', shape=([1, 1, 32, 32]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'windc1': tf.get_variable('widnc1', shape=([1, 1, 32, 16]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'wd1': tf.get_variable('wd1', shape=([13*13*112, 1024]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'wd2': tf.get_variable('wd2', shape=([1024, 256]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'out': tf.get_variable('w_out', shape=([256, n_classes]), initializer=tf.contrib.layers.xavier_initializer_conv2d())
}

biases = {
    'bc1': tf.get_variable('bc1', shape=([32]), initializer=tf.contrib.layers.xavier_initializer()),
    'binc1': tf.get_variable('binc1', shape=([16]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'binc3': tf.get_variable('binc3', shape=([32]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'binc5': tf.get_variable('binc5', shape=([32]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'binpc1': tf.get_variable('binpc1', shape=([32]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'bindc1': tf.get_variable('bindc1', shape=([16]), initializer=tf.contrib.layers.xavier_initializer_conv2d()),
    'bd1': tf.get_variable('bd1', shape=([1024]), initializer=tf.contrib.layers.xavier_initializer()),
    'bd2': tf.get_variable('bd2', shape=([256]), initializer=tf.contrib.layers.xavier_initializer()),
    'out': tf.get_variable('b_out', shape=([n_classes]), initializer=tf.contrib.layers.xavier_initializer())
}
In [30]:
# Create the Graph

X = tf.placeholder(tf.float32, (None, image_shape[0]*image_shape[1]))
y = tf.placeholder(tf.float32, (None, n_classes))
keep_prob = tf.placeholder(tf.float32)

fc2 = Network(X, keep_prob)

loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(fc2, y))
opt = tf.train.AdamOptimizer(learning_rate=1.0e-3)
train_op = opt.minimize(loss_op)
correct_prediction = tf.equal(tf.argmax(fc2, 1), tf.argmax(y, 1))
accuracy_op = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
wrong_prediction = tf.not_equal(tf.argmax(fc2, 1), tf.argmax(y, 1))

Question 3

What does your final architecture look like? (Type of model, layers, sizes, connectivity, etc.) For reference on how to build a deep neural network using TensorFlow, see Deep Neural Network in TensorFlow from the classroom.

Answer:

In [31]:
### Train your model here.
### Feel free to use as many code cells as needed.
Final network structure: convolution with 7x7x32 filter followed by max pooling inception layer fully connected layer of size 1024 fully connected layer of size 256 output
In [32]:
def evaluate(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy, total_loss = 0, 0
    sess = tf.get_default_session()
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        loss, accuracy =  sess.run([loss_op, accuracy_op], feed_dict={X: batch_x, y: batch_y, keep_prob: 1.0})
        total_accuracy += (accuracy * batch_x.shape[0])
        total_loss     += (loss * batch_x.shape[0])
    return total_loss / num_examples, total_accuracy / num_examples
In [33]:
def find_error(X_data, y_data):
    num_examples = len(X_data)
    total_accuracy, total_loss = 0, 0
    sess = tf.get_default_session()
    wrong_list = np.array([])
    for offset in range(0, num_examples, BATCH_SIZE):
        batch_x, batch_y = X_data[offset:offset+BATCH_SIZE], y_data[offset:offset+BATCH_SIZE]
        wrong =  sess.run(wrong_prediction, feed_dict={X: batch_x, y: batch_y, keep_prob: 1.0})
        #print (offset, len(batch_y[wrong].argmax(axis=1)))
        wrong_list = np.append(wrong_list, batch_y[wrong].argmax(axis=1))
    return wrong_list
In [34]:
EPOCHS = 10
dropout_keep_prob = 0.4
BATCH_SIZE = 128
In [35]:
train_acc_epoch = []
train_loss_epoch = []
valid_acc_epoch = []
valid_loss_epoch = []

saver = tf.train.Saver()

with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        #sess.run(tf.initialize_all_variables())
        steps_per_epoch = len(X_train) // BATCH_SIZE
        num_examples = steps_per_epoch * BATCH_SIZE

        # Train model
        train_start_time = time()
        for i in range(EPOCHS):
            start_time = time()
            for step in range(steps_per_epoch):
                batch_x = X_train[step*BATCH_SIZE:(step+1)*BATCH_SIZE,...]
                batch_y = y_train[step*BATCH_SIZE:(step+1)*BATCH_SIZE]
                loss = sess.run(train_op, feed_dict={X: batch_x, y: batch_y, keep_prob: dropout_keep_prob})
                
            tra_loss, tra_acc = evaluate(X_train, y_train)
            train_acc_epoch.append(tra_acc)
            train_loss_epoch.append(tra_loss)

            val_loss, val_acc = evaluate(X_valid, y_valid)
            valid_acc_epoch.append(val_acc)
            valid_loss_epoch.append(val_loss)

            print("EPOCH {} ...".format(i+1))
            print("Training loss = {:.3f}   Validation loss = {:.3f}".format(tra_loss, val_loss))
            print("Training accuracy = {:.3f}   Validation accuracy = {:.3f}".format(tra_acc, val_acc))
            print("time taken = {:.1f} s".format(time() - start_time))
            print()
            
        # save the model
        saver.save(sess, "my_model")
            
        # Evaluate on the test data
        
        test_loss, test_acc = evaluate(features_test, labels_test)
        wrong_pred = find_error(features_test, labels_test)
        f = np.bincount(wrong_pred.astype(int))
        #print (sorted(zip(range(len(f)), f), key=lambda x: x[1]))
        print ("freq. wrong preds ", np.bincount(wrong_pred.astype(int)))
        print("Test loss = {:.3f}".format(test_loss))
        print("Test accuracy = {:.3f}".format(test_acc))
            
        
print("total training time = {:.1f} s".format(time() - train_start_time))
EPOCH 1 ...
Training loss = 2.382   Validation loss = 2.387
Training accuracy = 0.405   Validation accuracy = 0.403
time taken = 124.0 s

EPOCH 2 ...
Training loss = 0.791   Validation loss = 0.822
Training accuracy = 0.777   Validation accuracy = 0.763
time taken = 126.8 s

EPOCH 3 ...
Training loss = 0.395   Validation loss = 0.426
Training accuracy = 0.899   Validation accuracy = 0.891
time taken = 126.7 s

EPOCH 4 ...
Training loss = 0.221   Validation loss = 0.255
Training accuracy = 0.939   Validation accuracy = 0.929
time taken = 119.0 s

EPOCH 5 ...
Training loss = 0.152   Validation loss = 0.187
Training accuracy = 0.960   Validation accuracy = 0.951
time taken = 124.7 s

EPOCH 6 ...
Training loss = 0.102   Validation loss = 0.138
Training accuracy = 0.979   Validation accuracy = 0.966
time taken = 118.6 s

EPOCH 7 ...
Training loss = 0.080   Validation loss = 0.116
Training accuracy = 0.981   Validation accuracy = 0.971
time taken = 118.5 s

EPOCH 8 ...
Training loss = 0.058   Validation loss = 0.094
Training accuracy = 0.986   Validation accuracy = 0.975
time taken = 118.2 s

EPOCH 9 ...
Training loss = 0.049   Validation loss = 0.087
Training accuracy = 0.988   Validation accuracy = 0.976
time taken = 118.0 s

EPOCH 10 ...
Training loss = 0.035   Validation loss = 0.073
Training accuracy = 0.993   Validation accuracy = 0.982
time taken = 118.1 s

freq. wrong preds  [24 18 41 33 47 78 25 68 33 12 10 15 25  5 29  5  1  4 75 29 11 35  9 23 30
 64 20 31 20  1 68 13  4  4  4 22 10  4 31  3 32 10  5]
Test loss = 0.377
Test accuracy = 0.918
total training time = 1235.7 s
In [36]:
# dump the trends into a file
import pandas as pd
trends = {'training_loss': train_loss_epoch, 'validation_loss': valid_loss_epoch,
            'training_accuracy': train_acc_epoch, 'validation_accuracy': valid_acc_epoch}
df_trend = pd.DataFrame(trends)
df_trend['Epoch'] = np.arange(df_trend.shape[0]) + 1
df_trend = df_trend.set_index('Epoch')
In [37]:
df_trend[['training_loss', 'validation_loss']].plot()
Out[37]:
<matplotlib.axes._subplots.AxesSubplot at 0x141799b70>
In [38]:
df_trend[['training_accuracy', 'validation_accuracy']].plot()
Out[38]:
<matplotlib.axes._subplots.AxesSubplot at 0x139bf7d30>

Question 4

How did you train your model? (Type of optimizer, batch size, epochs, hyperparameters, etc.)

Answer:

I have used the Adam optimizer with a batch size of 128. The main hyperparameter I have tuned is the dropout percentage. I have a strong inception network that could easily fit the training databut performed slightly worse on the validation set. The dropout percentage control the overfitting and keeps the validation loss close to the training loss.

Question 5

What approach did you take in coming up with a solution to this problem? It may have been a process of trial and error, in which case, outline the steps you took to get to the final solution and why you chose those steps. Perhaps your solution involved an already well known implementation or architecture. In this case, discuss why you think this is suitable for the current problem.

Answer:

I started out with the LeNet structure that gave me a 99% accuracy on the validation set and 91% accuracy on the held out test set. Since the validation accuracy was almost 100%, there was not much improvement necessary on the network. However, I thought the test set may be fundamentally different from the training and validation set for the poor performance and hence generated new data by translating and rotating the training data. However, that did not improve the performance on the test data. I did not try to correct the imbalance of the training labels because I noticed one of the poorest performing label on the test data had one of the most training data available (Speed limit (80km/h)). Finally I implemented an Inception network because I wanted to play with the construction of complex networks, though I did not expect any gains from it.

Step 3: Test a Model on New Images

Take several pictures of traffic signs that you find on the web or around you (at least five), and run them through your classifier on your computer to produce example results. The classifier might not recognize some local signs but it could prove interesting nonetheless.

You may find signnames.csv useful as it contains mappings from the class id (integer) to the actual sign name.

Implementation

Use the code cell (or multiple code cells, if necessary) to implement the first step of your project. Once you have completed your implementation and are satisfied with the results, be sure to thoroughly answer the questions that follow.

In [39]:
### Load the images and plot them here.
### Feel free to use as many code cells as needed.

Question 6

Choose five candidate images of traffic signs and provide them in the report. Are there any particular qualities of the image(s) that might make classification difficult? It could be helpful to plot the images in the notebook.

Answer:

In [40]:
### Run the predictions here.
### Feel free to use as many code cells as needed.
In [42]:
from scipy.ndimage import imread
from scipy.misc import imresize
In [48]:
import glob
In [62]:
pics = glob.glob("*.png")
In [63]:
n = len(pics)
images = []
print ("number of images = ", n)
plt.figure(figsize=[4, 20])
for i in range(n):
    image = imread(pics[i])
    image = imresize(image, (32, 32, 3))
    images.append(image)
    plt.subplot(n, 1, i+1)
    plt.imshow(image)
    plt.axis('off')
number of images =  8
In [64]:
# preprocess the new images

new_images = np.array(images)
new_images = rgb2gray(new_images)
new_images = new_images.reshape(len(new_images), -1)
new_images = min_max_scale(new_images)
print (new_images.shape)
(8, 1024)
In [65]:
# restore saved session

sess = tf.Session()
new_saver = tf.train.import_meta_graph('my_model.meta')
new_saver.restore(sess, tf.train.latest_checkpoint('./'))
In [66]:
# make prediction

batch_x = new_images
predictions = sess.run(tf.argmax(fc2, 1), feed_dict={X: batch_x, keep_prob: 1.0})
print (predictions)
[22 22 23 26 11 28 29 25]
In [67]:
signs = pd.read_csv("signnames.csv")
In [68]:
signs.SignName[predictions]
Out[68]:
22                               Bumpy road
22                               Bumpy road
23                            Slippery road
26                          Traffic signals
11    Right-of-way at the next intersection
28                        Children crossing
29                        Bicycles crossing
25                                Road work
Name: SignName, dtype: object

Question 7

Is your model able to perform equally well on captured pictures when compared to testing on the dataset? The simplest way to do this check the accuracy of the predictions. For example, if the model predicted 1 out of 5 signs correctly, it's 20% accurate.

NOTE: You could check the accuracy manually by using signnames.csv (same directory). This file has a mapping from the class id (0-42) to the corresponding sign name. So, you could take the class id the model outputs, lookup the name in signnames.csv and see if it matches the sign from the image.

Answer:

Only 3 out of 8 signs are classified correctly.

In [41]:
### Visualize the softmax probabilities here.
### Feel free to use as many code cells as needed.

Question 8

Use the model's softmax probabilities to visualize the certainty of its predictions, tf.nn.top_k could prove helpful here. Which predictions is the model certain of? Uncertain? If the model was incorrect in its initial prediction, does the correct prediction appear in the top k? (k should be 5 at most)

tf.nn.top_k will return the values and indices (class ids) of the top k predictions. So if k=3, for each sign, it'll return the 3 largest probabilities (out of a possible 43) and the correspoding class ids.

Take this numpy array as an example:

# (5, 6) array
a = np.array([[ 0.24879643,  0.07032244,  0.12641572,  0.34763842,  0.07893497,
         0.12789202],
       [ 0.28086119,  0.27569815,  0.08594638,  0.0178669 ,  0.18063401,
         0.15899337],
       [ 0.26076848,  0.23664738,  0.08020603,  0.07001922,  0.1134371 ,
         0.23892179],
       [ 0.11943333,  0.29198961,  0.02605103,  0.26234032,  0.1351348 ,
         0.16505091],
       [ 0.09561176,  0.34396535,  0.0643941 ,  0.16240774,  0.24206137,
         0.09155967]])

Running it through sess.run(tf.nn.top_k(tf.constant(a), k=3)) produces:

TopKV2(values=array([[ 0.34763842,  0.24879643,  0.12789202],
       [ 0.28086119,  0.27569815,  0.18063401],
       [ 0.26076848,  0.23892179,  0.23664738],
       [ 0.29198961,  0.26234032,  0.16505091],
       [ 0.34396535,  0.24206137,  0.16240774]]), indices=array([[3, 0, 5],
       [0, 1, 4],
       [0, 5, 1],
       [1, 3, 5],
       [1, 4, 3]], dtype=int32))

Looking just at the first row we get [ 0.34763842, 0.24879643, 0.12789202], you can confirm these are the 3 largest probabilities in a. You'll also notice [3, 0, 5] are the corresponding indices.

Answer:

In [75]:
val = sess.run(tf.nn.top_k(fc2, k=5), feed_dict={X: batch_x, keep_prob: 1.0})
In [76]:
val[1]
Out[76]:
array([[22, 29, 25, 24, 26],
       [22, 38, 25, 20, 34],
       [23, 19, 11,  9, 35],
       [26, 18, 24, 22, 27],
       [11, 40, 27, 30, 12],
       [28, 20, 41, 11, 23],
       [29, 28, 23, 35, 20],
       [25, 31, 21, 18, 24]], dtype=int32)
In [79]:
for v in val[1]:
    print (v)
    print (signs.SignName[v])
[22 29 25 24 26]
22                   Bumpy road
29            Bicycles crossing
25                    Road work
24    Road narrows on the right
26              Traffic signals
Name: SignName, dtype: object
[22 38 25 20 34]
22                      Bumpy road
38                      Keep right
25                       Road work
20    Dangerous curve to the right
34                 Turn left ahead
Name: SignName, dtype: object
[23 19 11  9 35]
23                            Slippery road
19              Dangerous curve to the left
11    Right-of-way at the next intersection
9                                No passing
35                               Ahead only
Name: SignName, dtype: object
[26 18 24 22 27]
26              Traffic signals
18              General caution
24    Road narrows on the right
22                   Bumpy road
27                  Pedestrians
Name: SignName, dtype: object
[11 40 27 30 12]
11    Right-of-way at the next intersection
40                     Roundabout mandatory
27                              Pedestrians
30                       Beware of ice/snow
12                            Priority road
Name: SignName, dtype: object
[28 20 41 11 23]
28                        Children crossing
20             Dangerous curve to the right
41                        End of no passing
11    Right-of-way at the next intersection
23                            Slippery road
Name: SignName, dtype: object
[29 28 23 35 20]
29               Bicycles crossing
28               Children crossing
23                   Slippery road
35                      Ahead only
20    Dangerous curve to the right
Name: SignName, dtype: object
[25 31 21 18 24]
25                    Road work
31        Wild animals crossing
21                 Double curve
18              General caution
24    Road narrows on the right
Name: SignName, dtype: object

For signs the classifier has seen in training, the correct prediction lies within top 5. For the first three signs, the predictions are always wrong because the classifier has not seen those signs.

Note: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to \n", "File -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission.